
Separating containers from non-containers: A framework for learning behavior-grounded object categories


Abstract

Many tasks that humans perform on a daily basis require the use of a container. For example, tool boxes are used to store tools, carafes are used to serve beverages, and hampers are used to collect dirty clothes. One long-term goal for the field of robotics is to create robots that can help people perform similar tasks. Yet, robots currently lack the ability to detect and use most containers. In order for a robot to have these capabilities, it must first form an object category for containers.

This thesis describes a computational framework for learning a behavior-grounded object category for containers. The framework was motivated by the developmental progression of container learning in humans. The robot learns the category representation by interacting with objects and observing the resulting outcomes. It also learns a visual model for containers using the category labels from its behavior-grounded object category. This allows the robot to identify the category of a novel object using either interaction or passive observation.

There are two main contributions of this thesis. The first contribution is the new behavior-grounded computational framework for learning object categories. The second contribution is that the visual model of an object category is acquired in the last step of this learning framework, after the robot has interacted with the objects. This is contrary to traditional approaches to object category learning, in which the visual model is learned before the robot has even had the chance to touch the object. Because the visual model is learned in the last step, the robot can ascribe to a novel object the functional properties of its visually identified object category.
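The two-stage ordering described in the abstract (behavior-grounded categories first, visual model last) can be illustrated with a minimal sketch. The snippet below is not the thesis's actual implementation: the outcome features, visual features, and the use of scikit-learn's KMeans and LogisticRegression are illustrative assumptions standing in for whatever representations and learners the framework actually uses.

```python
# Minimal sketch of a two-stage, behavior-grounded category learning pipeline.
# Assumptions (not from the thesis): interaction outcomes and visual appearance
# are summarized as fixed-length feature vectors; categories are formed by
# unsupervised clustering of outcomes; the visual model is a standard
# supervised classifier trained on the resulting labels.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: behavior-grounded category learning.
# Each row is a hypothetical outcome descriptor recorded after the robot
# interacts with an object and observes what happens.
interaction_outcomes = rng.normal(size=(40, 6))
grounding = KMeans(n_clusters=2, n_init=10, random_state=0)
category_labels = grounding.fit_predict(interaction_outcomes)  # e.g., container vs. non-container

# Stage 2: the visual model is learned last, using the behavior-grounded labels.
# Each row is a hypothetical visual descriptor of the same object.
visual_features = rng.normal(size=(40, 10))
visual_model = LogisticRegression(max_iter=1000)
visual_model.fit(visual_features, category_labels)

# A novel object can now be categorized by passive observation alone, so the
# functional properties of its visually identified category can be ascribed to it.
novel_object = rng.normal(size=(1, 10))
print("Predicted behavior-grounded category:", visual_model.predict(novel_object)[0])
```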

Bibliographic details

  • Author

    Griffith, Shane David;

  • Author affiliation
  • Year: 2011
  • Total pages
  • Original format: PDF
  • Language: en
  • Chinese Library Classification
